专利摘要:
IMPROVED OCR FOR AUTOMATED STB TESTING The present application provides a user-configurable test system for set-top boxes (STBs) and other consumer devices providing video output. In particular, it provides a method of improving an Optical Character Recognition (OCR) process in such test systems.
公开号:BR112013013113B1
申请号:R112013013113-6
申请日:2011-12-22
公开日:2022-02-01
发明作者:Liam Friel
申请人:Accenture Global Solutions Limited;
IPC主号:
专利说明:

FIELD
[001] The present application generally concerns automated testing of set-top boxes (STBs) and other audiovisual equipment.
BACKGROUND
[002] A set-top box (STB), also known as a digibox or set-top unit (STU), is a device that connects to a television and an external signal source, converting the signal into content that can then be delivered as an audiovisual (A/V) signal for display on the television screen or other A/V device. More often than not, the external signal source is provided through a satellite or cable connection.
[003] As with other consumer products, both manufacturers and suppliers are interested in ensuring that products operate correctly and as specified. Initially, and to this day, a significant part of the testing is performed manually, whereby a tester issues a command to the STB, either via the user interface on the STB itself or via a remote control device as illustrated in Figure 1, and observes the response of the STB on a TV screen. As shown in Figure 1, a typical STB 10 has several signal inputs including an RF signal 14, which, for example, may come from a satellite or cable connection. An A/V signal 16 may also be provided as an input, allowing the set-top box to feed a signal to a television from a VCR, DVD player, Blu-ray Disc player, media jukebox or other similar device. The television output is an A/V signal 18 that can be provided through a variety of standard interfaces including SCART and HDMI. To allow the user to control the operation of the STB, several keys and similar controls can be provided on the STB itself. Additionally, and most commonly employed by users, the STB may have an infrared (IR) or wireless remote control input configured to work with a remote control device 12.
[004] As manual testing can be time-consuming, error-prone and in some cases lacking in accuracy, an effort has been made to automate some of these tests. In relation to these automated tests, it will be appreciated that such testing is typically performed on the final product by users without necessarily any detailed knowledge of, or access to, the STB's internal circuitry. STB testing is therefore generally performed on a “black box” basis, where only the inputs and outputs are available for modification and examination. Test methods and systems have accordingly been developed specifically for set-top boxes; an example of such a system is StormTest™, provided by the S3 Group of Dublin, Ireland, which can be used to test STBs, televisions and similar devices such as digital media players. An arrangement for an STB test system is shown in Figure 2. The STB test system 20 comprises a controller 28 that manages the test function and interacts with the other features of the STB test system. These features include an output interface for controlling a remote control device 12, allowing commands to be sent to the STB, and an input interface for receiving video and/or audio signals from the STB 10. The input may include an audio capture device 24 to accept audio as a test signal and/or a video imager 22 or similar device to accept video frames from the STB. The captured data is then made available to a processor for analysis, which in turn produces a test result and provides this result to a user. Thus, during a typical test, the test system issues a command or sequence of commands to the STB, suitably through the remote control interface. Each frame of video and/or audio is captured and made available to the test system for analysis of the STB's response to the commands issued to it.
[005] Typically, tests may include generating a “change channel” command to the STB followed by analyzing the audio and/or video outputs to ensure that the change channel command was received and correctly executed by the STB.
[006] Set-top boxes are complex devices containing a powerful built-in main CPU and various peripheral devices. They generally run a sophisticated operating system (e.g. Linux or VxWorks), perform complex functions, and run a large and complex software stack on that operating system. These devices generally present users with a sophisticated graphical user interface with menus and on-screen graphics. Test automation typically involves, among other areas, optical character recognition (OCR) to read text from the screen and compare it to a known or expected value to determine whether the user interface presented to the user is as expected.
[007] As shown in Figure 2, OCR can be performed on a captured frame or on a section of a frame. Typically, when setting up a test routine, the person setting up the test will identify the section of the frame where text is to be found. That person can also configure the test by defining the expected result of the OCR process. The expected result can be a known predetermined value or it can be obtained elsewhere in the test routine. Additionally, the OCR process can be employed to obtain and store information from the device under test. OCR is generally performed by a software component known as an “OCR engine”. There are several OCR engines (software products) generally available for this purpose, that is, engines that can process images and extract text from them. However, there are several practical difficulties in using an OCR engine in the context of a frame captured from an STB or from a device with a display, for example a digital television. One of the reasons for this is that OCR engines conventionally come from the area of document processing, which differs in several respects from images captured from a video stream. The resolution of document scanners used to capture document images is generally much higher than the resolution of images, even high-definition images, captured from video.
[008] Documents generally have a fixed foreground/background contrast pattern: generally dark text on a light background. Furthermore, the contrast for a given document page is not variable: once the OCR engine has determined the foreground and background colors for a page, they will not change. It is therefore known, for example, to use filters to optimize an OCR engine where the page color is not white. However, contrast in modern dynamic user interfaces can be highly variable. As an example, transparent panes in the user interface are common, and so the text contrast will vary as the background television program changes. To make matters worse, images captured from video streams can be very noisy. As a result, the experience of the present inventor is that the accuracy of a typical OCR engine when employed with captured video may only be in the region of 65%-90%.
[009] It would be beneficial to improve the performance of OCR engines when analyzing video images.
SUMMARY
[0010] In particular, the present application provides systems and methods according to the claims that follow.
DESCRIPTION OF DRAWINGS
[0011] The present application will now be described with reference to the attached drawings, in which:
[0012] Figure 1 is an illustration of an exemplary STB known in the art;
[0013] Figure 2 is an illustration of a conventional prior art STB testing system having OCR;
[0014] Figure 3 is a block diagram of an aspect of an STB test system in accordance with an embodiment of the present application;
[0015] Figure 4 illustrates an exemplary arrangement for selecting a filter configuration according to another embodiment; and
[0016] Figure 5 illustrates a method for use in the arrangement of Figure 4.
DETAILED DESCRIPTION
[0017] The present application is based on the premise that developing an OCR engine specifically for captured video images can be an expensive and time-consuming process. Instead, it is desirable to provide a method that improves the performance of existing OCR engines, for example engines intended for scanned documents.
[0018] The present application improves the performance of OCR engines on captured video frames or sections thereof. The enhancement is achieved by pre-processing the image (the captured frame or a part of it) before submission to the OCR engine. In particular, it has been found by the present inventor that by processing an image with an image filter the performance of the OCR engine can be improved. The difficulty is that while certain image filters may work perfectly in certain situations, they can result in worse performance in others.
[0019] An exemplary test system for testing an STB generally may employ the known STB test system of Figure 2 and thus may include a first interface to control the operation of a set-top box, for example by sending commands to the STB via an IR remote. However, it will be noticed that other input/control interfaces can be employed; for example, a direct serial connection may be employed if the STB has such an interface available. It will be noticed that an interface used to control the STB is commonly employed in conventional test systems for STBs and thus its design and operation are easily understood and familiar to those skilled in the art. The test system is configured to analyze video from the STB along with other outputs including, for example, audio.
[0020] Specifically, a second interface is employed to obtain one or more outputs from the STB. This second interface may include a video image capturer to capture video frames from the STB and/or an analog-to-digital converter to capture the audio output from the STB. It will be appreciated that the technology associated with these elements would also be familiar to those skilled in the art. Suitably, the video image capturer is synchronized with the video frame sync, which allows it to capture a full video frame at a time. It will be appreciated that where a digital output of the set-top box is available, the requirement for a video image capturer/analog audio capture device can be removed and replaced by a digital interface that directly captures the video frames being transmitted. Where the device under test has an integrated display for displaying video, video can be captured using a video camera directed at the display. As with existing systems, the system can select a particular region of interest from the frame. The region of interest would typically be predefined during the process of setting up a test routine.
[0021] In the exemplary embodiment illustrated in Figure 3, it can be seen that the frame capture and OCR engine processes are unchanged; instead, an image processor (filter) 40 is provided to pre-filter the image (which, as explained previously, may be the entire captured frame or, more likely, a pre-selected section of the captured frame) before processing by the OCR engine to extract text from the image.
[0022] The image processor filter 40 is a configurable filter, such that the filter function applied to the image can be varied by the system. More specifically, the configuration of the filter for an image at a particular point in a test routine is suitably preset during an initial configuration process in which the test routine executed by the test system is prepared.
[0023] The mode of operation will now be explained with reference to the method of determining a configuration for the configurable filter as shown in Figure 4, wherein the configurable filter 40 comprises a sequence of different image filters 42a-42f that can be applied sequentially to an image 44 of a captured frame or part thereof. As will be seen in the explanation that follows, each of the filters in the sequence may or may not be used to filter the image. Thus, the overall filter can be configured by selecting which particular filters will be used. Additionally, each filter can be configurable via one or more parameters (shown by the numbers 0-5 for each filter), which adjust the filter characteristics. One of these parameters, for example 0, may indicate that the filter is not to be used. After filtering has been completed, the filtered image is passed to the OCR engine 24 for processing, where the recognized text can be compared with an expected result to determine the success or failure of a particular test. In certain circumstances, there may not be an expected result, in which case the result can simply be stored for reference or for subsequent use in the test routine. This same routine can also be employed in the process to determine the filter configuration, as described in more detail below.
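The filter chain just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the image is an abstract grid of 8-bit grayscale values, the two stages (`invert`, `contrast`) are hypothetical examples, and a parameter of 0 bypasses a stage, mirroring the “0” setting in Figure 4.

```python
def invert(image, _param):
    # Invert 8-bit grayscale values.
    return [[255 - p for p in row] for row in image]

def contrast(image, param):
    # Scale pixel values about mid-grey; the parameter selects the gain.
    gain = 1.0 + 0.25 * param
    return [[max(0, min(255, int(128 + (p - 128) * gain))) for p in row]
            for row in image]

# Ordered chain of available filter stages (illustrative names).
FILTERS = [("invert", invert), ("contrast", contrast)]

def apply_chain(image, config):
    """config maps filter name -> parameter; 0 (or absence) disables a stage."""
    for name, fn in FILTERS:
        param = config.get(name, 0)
        if param != 0:  # parameter 0 means the stage is skipped
            image = fn(image, param)
    return image

# With only the invert stage enabled, 0/128/255 become 255/127/0.
print(apply_chain([[0, 128, 255]], {"invert": 1, "contrast": 0}))
```

A configuration is then simply a small mapping of stage names to parameters, which is what the setup process of Figure 5 searches over and stores.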
[0024] As examples, the following image filters have been determined to improve the accurate detection of text by the OCR engine when operating on color images:
• Selectively remove one or more color components from the original image, with or without conversion to grayscale of the resulting image;
• Adjust image contrast;
• Invert the colors in the image;
• Blur the image;
• Emphasize the image;
• Zoom the image so that it is enlarged, interpolating pixels from the original image to create the new image.
[0025] It will be noticed, however, that each of these can be considered as an image filter. It will be noticed that a general parameter for a filter can be whether or not it is used. A specific parameter in the case of removing one or more color components would be the color components to be removed, and so in the case of an RGB (Red, Green, Blue) image the filter settings can be removal of: a) R; b) G; c) B; d) R and G; e) R and B; f) G and B.
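A minimal sketch of the color-component removal filter, assuming pixels are (R, G, B) tuples. The helper names and the simple channel average are illustrative assumptions; a real system might use weighted luma for the grayscale conversion instead.

```python
def remove_components(pixel, remove):
    # Zero out the named channels; `remove` is a string such as "RG".
    r, g, b = pixel
    if "R" in remove:
        r = 0
    if "G" in remove:
        g = 0
    if "B" in remove:
        b = 0
    return (r, g, b)

def to_gray(pixel):
    # Simple average of the surviving channels (assumed conversion).
    return sum(pixel) // 3

# Setting d) removes red and green, leaving only blue to contribute.
pixel = (90, 200, 30)
print(remove_components(pixel, "RG"))           # only the blue channel remains
print(to_gray(remove_components(pixel, "RG")))  # grayscale of the result
```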
[0026] In the case of adjusting image contrast, an individual parameter can be whether to increase or decrease the contrast, or the amount of contrast adjustment. Similarly, in the case of blurring or emphasizing, the degree of blurring or emphasis would be an individual filter parameter. In the case of an image filter to “zoom in” on the image so that it is enlarged, individual parameters can be the degree of scaling and/or the selection of a particular type of interpolation, e.g. bicubic, linear, etc.
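The “zoom” filter with an integer scale parameter might look like the following sketch, using nearest-neighbour replication as the simplest interpolation choice (the text mentions bicubic and linear as alternatives; those would be analogous but more involved):

```python
def zoom(image, scale):
    # Enlarge a grid of pixel values by an integer factor using
    # nearest-neighbour replication: each pixel becomes a scale x scale block.
    out = []
    for row in image:
        scaled_row = [p for p in row for _ in range(scale)]
        for _ in range(scale):
            out.append(list(scaled_row))
    return out

print(zoom([[1, 2]], 2))  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```

Upscaling small on-screen text in this way gives the OCR engine more pixels per glyph, which is why the zoom filter can help with low-resolution captured frames.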
[0027] An exemplary method of selecting a configuration for the filter is shown in Figure 5. In this exemplary method a user, when configuring the test system, and as would be performed conventionally, can optionally identify a section of the screen (captured frame) to be analyzed. The user specifies the expected result of the OCR process. The expected result is used to compare the OCR output of different filter settings on an image sequence, to determine the performance of the different filter settings. Each image in the sequence contains substantially the same text content, that is, a user sees the same text in each image in the sequence. For example, capturing 100 sequential frames of video having an OSD (On Screen Display) box displayed would yield the same text content visible to a user in the box in each of the captured frames. It will be noticed, however, that a bit-for-bit comparison of the captured frames may well turn out to be completely different, for reasons explained earlier, including noise and possibly changing video content behind the OSD box. Accordingly, a sequence of images is captured as an initial step and the same sequence of images is used to compare the performance of each filter setting, as on-screen content may time out after a few seconds and an OSD may disappear entirely. It will be appreciated that the image sequence may be obtained as part of this process or it may have been obtained separately and stored.
[0028] A first filter setting is selected at 52 to be used to filter the first selected image in the sequence at 54. In the exemplary arrangement of Figure 4, a series of filters are applied to the image in a chain. The first filter setting can have a setting of “null” for each filter, which does not change the image. Pictorially, in Figure 4, this is represented by the “0” setting for a filter. This first filter setting is used to establish the filter, and the first image is processed at 56 by the filter using this setting to provide a filtered image. The filtered image is then passed at 58 through the OCR engine and the extracted text is compared at 60 with the expected result to determine the performance of the filter. It will be noticed that the expected result need not be the entirety of the extracted text, but can in fact be a part of the text. For example, the person setting up the test may simply require the presence of a particular word in the extracted text rather than an exact match to the full extracted text. A decision is then made at 62 based on the result: where the filter performance is negative (i.e. text not recognized correctly), the next filter setting can be selected at 52 and the test run again. Where the filter performance is determined to be positive, the test can be repeated for the next image in the sequence at 54. Where the performance of a particular filter setting is positive for all images in the sequence, that setting can be selected and stored at 64 within the test routine for future use in a test process, at which point the setup process for that aspect of the test is complete.
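The selection loop of Figure 5 can be sketched as follows. This is a hedged stand-in, not the patented implementation: `apply_filter` and `ocr` are placeholders for the configurable filter and the OCR engine, and a setting is accepted only if it yields the expected text (as a substring, matching the partial-match note above) for every image in the sequence.

```python
def select_setting(images, settings, apply_filter, ocr, expected):
    # Step 52: choose a candidate filter setting.
    for setting in settings:
        ok = True
        # Steps 54-60: filter each image, OCR it, compare with the expected text.
        for image in images:
            text = ocr(apply_filter(image, setting))
            if expected not in text:  # presence of the word suffices
                ok = False            # step 62: performance negative
                break
        if ok:
            return setting            # step 64: store this setting
    return None                       # no candidate setting worked

# Toy usage with simulated stand-ins: images are numbers, the "filter" adds
# its setting, and the "OCR engine" only reads values of 2 or more.
chosen = select_setting([1, 2], [0, 1],
                        lambda img, s: img + s,
                        lambda x: "Menu" if x >= 2 else "??",
                        "Menu")
print(chosen)
```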
[0029] Alternatively, all filter settings can be tested, with the accuracy of each one being determined (i.e. for how many images from the image sequence a filter resulted in the OCR producing the expected result) and the filter with the best accuracy being selected.
[0030] Using this method, it is possible to choose a filter setting that results in a correct match to the recognized text.
[0031] Although the previous description refers to a sequence of images having substantially the same text content, it will be noticed that the sequence need not be the actual sequence of frames, and the training set (image sequence) may be, for example, captured frames chosen at separate intervals.
[0032] It will be noticed that, where the method is configured to select the first filter configuration that results in 100% accuracy, the filter configurations to be tested can be chosen in random order via traditional Monte Carlo methods, thus avoiding locally sub-optimal sets of filters that may result if filters are tried in strict order of definition.
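Trying the configurations in a randomly shuffled order, as suggested above, is a one-line change to the search. This sketch assumes an `is_perfect` predicate standing in for "this setting achieved 100% accuracy on the sequence"; the function name is illustrative.

```python
import random

def random_order_search(settings, is_perfect, rng=None):
    # Shuffle the candidate settings (Monte Carlo-style) instead of
    # testing them in strict definition order, then stop at the first
    # setting that achieves 100% accuracy.
    pool = list(settings)
    (rng or random).shuffle(pool)
    for setting in pool:
        if is_perfect(setting):
            return setting
    return None

# Whatever the shuffled order, the one perfect setting is found.
print(random_order_search([1, 2, 3], lambda s: s == 2))
```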
[0033] In another variation, it is also possible to generate the training set from the live signal, by capturing “live” images and accumulating these captured images in local storage. As long as each new image that arrives is recognized correctly, there is no need to retrain with all the existing captured images (since these, by definition, will also have matched). However, once a captured image does not match the expected text, the captured images form the training set, and the algorithm starts looking for a better filter using the captured images. Where an additional image does not match the expected result with a filter, it can be added to the training set, until a suitable training set has been selected.
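This live-training variation can be sketched as a small class. Everything here is a hedged stand-in: `ocr_with(setting, image)` simulates "filter with this setting, then OCR", and the retraining simply re-runs the search over the accumulated images on the first mismatch.

```python
class LiveTrainer:
    """Accumulate live frames; retrain the filter only on a mismatch."""

    def __init__(self, settings, ocr_with, expected):
        self.settings = settings      # candidate filter settings
        self.ocr_with = ocr_with      # stand-in for filter + OCR engine
        self.expected = expected
        self.current = settings[0]    # current best setting
        self.training_set = []        # accumulated captured images

    def on_frame(self, image):
        self.training_set.append(image)
        if self.ocr_with(self.current, image) == self.expected:
            return self.current       # still matching: no retraining needed
        # Mismatch: the stored images become the training set, and we look
        # for a setting that matches every one of them.
        for s in self.settings:
            if all(self.ocr_with(s, img) == self.expected
                   for img in self.training_set):
                self.current = s
                break
        return self.current
```

Per paragraph [0034], a real system would also bound this loop in time, accepting the current best setting once all frames have matched for a user-defined period.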
[0034] This training can be limited in time (so that, if all images match the expected text for a user-defined time period, the current best filter setting is deemed good enough and saved).
[0035] Given an image or set of images, it is possible for the system to automatically determine, with a high degree of accuracy, the expected text without user input. This is based on the observation that while misrecognized text is typically random, correctly recognized text is always the same. Therefore, the system can “assume” that the string that appears most frequently in the recognition results is likely to be the text the user wanted, and in most cases it will be correct. So while the method mentioned earlier refers to the user inputting an “expected” text result, it may not require the user to do this. In an alternative variation, the text extracted from a first image in a sequence can be used as the “expected” result for the rest of the images in the sequence, i.e., consistency of results is taken as a proxy for accuracy of results. Of course, it is possible that the OCR process may consistently fail to recognize the text correctly, in which case this alternative variation may not be appropriate.
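The most-frequent-string heuristic described above reduces to a frequency count over the recognition results; a minimal sketch:

```python
from collections import Counter

def infer_expected(results):
    # Misrecognitions tend to differ from one another, so the most
    # frequent OCR result across the sequence is assumed to be correct.
    return Counter(results).most_common(1)[0][0]

print(infer_expected(["Menu", "Mena", "Menu", "M3nu", "Menu"]))  # Menu
```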
[0036] It is thus possible, without user intervention of any kind, to determine a set of image processing filters to apply to the captured image that will improve recognition accuracy.
[0037] It will be appreciated that although the present application has been described generally with respect to testing STBs, it is equally applicable to testing other devices such as digital televisions, digital media players, DVD players and consumer devices such as mobile phones and PDAs. It will be appreciated that while digital televisions, digital media players and DVD players may have a remote control input (e.g. IR) to receive test commands from the test system, other devices may require a different interface on the test system to send test commands to the device under test.
[0038] Furthermore, it will be appreciated that the presently described techniques can also be employed directly, without a test setup process. In particular, although the previously mentioned method has been described in relation to using an initial setup routine to set/store the correct filter parameters to run a particular test, the method can also be used in a live scenario to determine text content from a sequence of captured images. In such an arrangement, a sequence of captured frames (or parts thereof) can be passed first through the configurable filter using a filter setting and then through the OCR engine to provide a text result. Where the text result is consistent for all captured frames (or a significant proportion of them), the text result can be considered valid. Where the text is not consistent, the process can be repeated with a different filter setting. This process can be repeated, varying the setting each time, until a valid result is determined. It will be appreciated that this process can be of general use for video captured from a device under test and can be used in a general way to identify text in video content.
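The live scenario just described can be sketched as follows. The agreement threshold and the `ocr_with` stand-in (filter with a setting, then OCR) are illustrative assumptions; text is accepted once a sufficient proportion of frames agree on the same result.

```python
from collections import Counter

def recognise_live(frames, settings, ocr_with, min_agreement=0.9):
    # Try each filter setting in turn; accept the text once enough of the
    # captured frames produce the same OCR result under that setting.
    for setting in settings:
        results = [ocr_with(setting, f) for f in frames]
        text, count = Counter(results).most_common(1)[0]
        if count / len(results) >= min_agreement:
            return text, setting   # consistent result: considered valid
    return None, None              # no setting gave a consistent result
```

Because validity is judged by consistency alone, no expected text and no setup routine are needed, matching the fully automatic operation described above.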
权利要求:
Claims (6)
[0001]
1. Method for determining a suitable setting for a filter to pre-process captured video images before performing OCR, the method characterized in that it comprises the steps of: a) providing a captured sequence of images containing the same content, the image sequence comprising a first image in the sequence and a plurality of subsequent images in the sequence; b) selecting a configuration for a filter to pre-process the images, from a plurality of available configurations; c) pre-processing the first image in the sequence using this filter configuration and subjecting the first pre-processed image to an OCR process to extract text from the first pre-processed image; d) pre-processing an image from the plurality of subsequent images in the sequence using the selected filter configuration and subjecting the pre-processed image of the plurality of subsequent images to an OCR process to extract text from the pre-processed image of the plurality of subsequent images; e) analyzing the text extracted from the first pre-processed image and the pre-processed image from the plurality of subsequent images to determine the performance of the filter, wherein the performance is determined to be positive if the extracted text is determined to be correct, and wherein the determination of whether the extracted text is correct is made by comparing the text extracted from the first image in the sequence and the text extracted from the image of the plurality of subsequent images in the sequence; and f) deciding that the configuration is suitable and storing the suitable configuration, wherein steps d) and e) are performed on each image of the plurality of subsequent images in the sequence and the determination that the configuration is suitable is made when the performance of the selected filter configuration is determined to be positive for each of the plurality of subsequent images; wherein steps d) and e) are repeated for each successive image in the sequence as long as the extracted text is determined to be correct from the preceding image, and when the text extracted from an image of the plurality of subsequent images is determined not to be correct, selecting a different configuration for image pre-processing and repeating steps c) to f).
[0002]
2. Method according to claim 1, characterized in that steps b) to e) are repeated for a plurality of different configurations, wherein the configuration comprises selecting one or more types of filters from a plurality of different filter types, or the configuration comprises selecting a parameter for a filter, or both.
[0003]
3. Method according to claim 1 or 2, characterized in that the filter comprises a sequence of different filters that are applied in sequence and the configuration comprises the selection of one or more parameters for each filter.
[0004]
4. Test system for analyzing a captured video frame from a device under test, the system characterized in that it comprises: an image processor comprising the filter for filtering the captured video frame or a region thereof to provide a filtered image; an OCR engine for parsing the filtered image to identify text in the image; wherein the system is configured to perform the method as defined in any one of claims 1 to 3; a remote control interface for transmitting commands to the device under test; wherein the filtering performed by the image processor is configured using the suitable setting determined.
[0005]
5. Test system, according to claim 4, characterized in that the filter is adapted to perform one or more of the following on the image: a) selectively remove one or more color components, with or without conversion to grayscale; b) adjust image contrast; c) invert colors; d) blur; e) emphasize the image; and f) zoom in on the image so that it is enlarged using interpolation.
[0006]
6. Test system according to claim 4 or 5, characterized in that a filter having a plurality of different filter settings is employed by the image processor and wherein at least one setting value defines the filter settings to be used to filter the captured video frame or a region thereof.
类似技术:
公开号 | 公开日 | 专利标题
BR112013013113B1|2022-02-01|Method for determining a suitable setting for a filter to pre-process captured video images before running OCR and test system to analyze a captured video frame from a device under test
US9875409B2|2018-01-23|Abnormality detection apparatus, abnormality detection method, and recording medium storing abnormality detection program
US9258458B2|2016-02-09|Displaying an image with an available effect applied
JP2011232111A|2011-11-17|Inspection device and fault detecting method used for inspection device
JP2009211275A|2009-09-17|Image processor, image processing method, program, and storage medium
CN107209922A|2017-09-26|Image processing equipment, image processing system and image processing method
WO2013118955A1|2013-08-15|Apparatus and method for depth map correction, and apparatus and method for stereoscopic image conversion using same
US20190362478A1|2019-11-28|Machine learning techniques for increasing color consistency across videos
CN104795011A|2015-07-22|EDID |-integrated display frame deviation detecting system and using method
US8803998B2|2014-08-12|Image optimization system and method for optimizing images
CN110414346A|2019-11-05|Biopsy method, device, electronic equipment and storage medium
CN107797784B|2020-04-03|Method and device for acquiring adaptive resolution of splicing processor
FR3018118A1|2015-09-04|METHOD FOR TESTING AN ELECTRONIC SYSTEM
Patel et al.2015|An improvement of forgery video detection technique using Error Level Analysis
KR20120121748A|2012-11-06|An apparatus and a method for setting a setting information of a camera using a chart
TWM583989U|2019-09-21|Serial number detection system
JPWO2014013792A1|2016-06-30|Noise evaluation method, image processing apparatus, imaging apparatus, and program
US10964022B2|2021-03-30|Image processing method, corresponding image processing apparatus and endoscope arrangement
KR101076478B1|2011-10-25|Method of detecting the picture defect of flat display panel using stretching technique and recording medium
JP2016127505A|2016-07-11|Image processing apparatus and image processing method
US11070891B1|2021-07-20|Optimization of subtitles for video content
CN111263140B|2021-08-27|Apparatus, system and method
RU2547703C2|2015-04-10|Method, apparatus and computer programme product for compensating eye colour defects
CN106993219B|2020-03-17|Video signal comparison method and device
JP2020003219A|2020-01-09|Defect inspection device for display device
同族专利:
公开号 | 公开日
GB201020101D0|2011-01-12|
CA2831113C|2017-05-09|
EP2643973B8|2020-11-11|
CA2831113A1|2012-05-31|
GB2485833A|2012-05-30|
US20170048519A1|2017-02-16|
EP2643973A1|2013-10-02|
BR112013013113A2|2017-10-17|
WO2012069662A1|2012-05-31|
EP2643973B1|2018-03-07|
DK2643973T3|2018-06-14|
US9942543B2|2018-04-10|
WO2012069662A8|2013-01-10|
US9516304B2|2016-12-06|
US20130347050A1|2013-12-26|
MY164774A|2018-01-30|
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题

JPH0358181A|1989-07-26|1991-03-13|Nec Corp|Optical character reader|
US5014328A|1990-07-24|1991-05-07|Eastman Kodak Company|Automatic detection and selection of a drop-out color used in conjunction with optical character recognition of preprinted forms|
US5262860A|1992-04-23|1993-11-16|International Business Machines Corporation|Method and system communication establishment utilizing captured and processed visually perceptible data within a broadcast video signal|
US5647023A|1994-07-28|1997-07-08|Lucent Technologies Inc.|Method of nonlinear filtering of degraded document images|
US5805747A|1994-10-04|1998-09-08|Science Applications International Corporation|Apparatus and method for OCR character and confidence determination using multiple OCR devices|
US6658662B1|1997-06-30|2003-12-02|Sun Microsystems, Inc.|Retrieving information from a broadcast signal|
US6731788B1|1999-01-28|2004-05-04|Koninklijke Philips Electronics N.V.|Symbol Classification with shape features applied to neural network|
US6295387B1|1999-05-27|2001-09-25|Lockheed Martin Corporation|Method and apparatus for determination of verified data|
CN1459761B|2002-05-24|2010-04-21|清华大学|Character identification technique based on Gabor filter set|
JP4320438B2|2003-06-06|2009-08-26|独立行政法人国立印刷局|Character string extraction processing device for printed matter|
US7664323B2|2005-01-28|2010-02-16|Microsoft Corporation|Scalable hash-based character recognition|
WO2007052100A2|2005-02-15|2007-05-10|Dspv, Ltd.|System and method of user interface and data entry from a video call|
SG10201403541UA|2005-06-10|2014-09-26|Accenture Global Services Gmbh|Electronic vehicle indentification|
US9171202B2|2005-08-23|2015-10-27|Ricoh Co., Ltd.|Data organization and access for mixed media document system|
GB2445688A|2005-09-01|2008-07-16|Zvi Haim Lev|System and method for reliable content access using a cellular/wireless device with imaging capabilities|
JP2007228220A|2006-02-23|2007-09-06|Funai Electric Co Ltd|Built-in hard diskdrive television receiver and television receiver|
US7664317B1|2006-03-23|2010-02-16|Verizon Patent And Licensing Inc.|Video analysis|
US7787693B2|2006-11-20|2010-08-31|Microsoft Corporation|Text detection on mobile communications devices|
US7995097B2|2007-05-25|2011-08-09|Zoran Corporation|Techniques of motion estimation when acquiring an image of a scene that may be illuminated with a time varying luminance|
CN101571922A|2008-05-04|2009-11-04|中兴通讯股份有限公司|Character recognition tool for mobile terminal automation testing and method thereof|
GB2470417B|2009-05-22|2011-08-03|S3 Res & Dev Ltd|A test system for a set-top box|
CN102023966B|2009-09-16|2014-03-26|鸿富锦精密工业(深圳)有限公司|Computer system and method for comparing contracts|
US8386121B1|2009-09-30|2013-02-26|The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration|Optimized tuner selection for engine performance estimation|
US9152883B2|2009-11-02|2015-10-06|Harry Urbschat|System and method for increasing the accuracy of optical character recognition|
US8737702B2|2010-07-23|2014-05-27|International Business Machines Corporation|Systems and methods for automated extraction of measurement information in medical videos|
JP5747485B2|2010-11-22|2015-07-15|株式会社リコー|Information processing apparatus, information processing method, and program|
GB2485833A|2010-11-26|2012-05-30|S3 Res & Dev Ltd|Improved OCR Using Configurable Filtering for Analysing Set Top Boxes|
US20120162440A1|2010-12-23|2012-06-28|The Directv Group, Inc.|System and method for performing an automated set top box test|
WO2010092585A1|2009-02-16|2010-08-19|Communitake Technologies Ltd.|A system, a method and a computer program product for automated remote control|
US9836376B2|2009-09-24|2017-12-05|Contec, Llc|Method and system for automated test of end-user devices|
GB2485833A|2010-11-26|2012-05-30|S3 Res & Dev Ltd|Improved OCR Using Configurable Filtering for Analysing Set Top Boxes|
US10108515B2|2013-03-01|2018-10-23|Sony Interactive Entertainment LLC|Remotely testing electronic devices using messaging and presence protocol|
GB2514410A|2013-05-24|2014-11-26|Ibm|Image scaling for images including low resolution text|
US10318804B2|2014-06-30|2019-06-11|First American Financial Corporation|System and method for data extraction and searching|
US9460503B2|2015-02-02|2016-10-04|Arris Enterprises, Inc.|Automated video testing using QR codes embedded in a video stream|
US10277497B2|2015-09-25|2019-04-30|Contec, Llc|Systems and methods for testing electronic devices using master-slave test architectures|
US9810735B2|2015-09-25|2017-11-07|Contec, Llc|Core testing machine|
US9960989B2|2015-09-25|2018-05-01|Contec, Llc|Universal device testing system|
US10122611B2|2015-09-25|2018-11-06|Contec, Llc|Universal device testing interface|
US10291959B2|2015-09-25|2019-05-14|Contec, Llc|Set top boxes under test|
US10320651B2|2015-10-30|2019-06-11|Contec, Llc|Hardware architecture for universal testing system: wireless router test|
US20170126536A1|2015-10-30|2017-05-04|Contec, Llc|Hardware Architecture for Universal Testing System: Cable Modem Test|
US9992084B2|2015-11-20|2018-06-05|Contec, Llc|Cable modems/eMTAs under test|
US9838295B2|2015-11-23|2017-12-05|Contec, Llc|Wireless routers under test|
US9900116B2|2016-01-04|2018-02-20|Contec, Llc|Test sequences using universal testing system|
US9900113B2|2016-02-29|2018-02-20|Contec, Llc|Universal tester hardware|
US10462456B2|2016-04-14|2019-10-29|Contec, Llc|Automated network-based test system for set top box devices|
US10779056B2|2016-04-14|2020-09-15|Contec, Llc|Automated network-based test system for set top box devices|
US10237593B2|2016-05-26|2019-03-19|Telefonaktiebolaget LM Ericsson|Monitoring quality of experience at audio/video endpoints using a no-reference method|
US9848233B1|2016-07-26|2017-12-19|Contec, Llc|Set top box and customer premise equipment unit test controller|
US10235260B1|2016-07-26|2019-03-19|Contec, Llc|Set top box and customer premise equipment unit test controller|
US9872070B1|2016-07-26|2018-01-16|Contec, Llc|Customer premise equipment and set top box quality control test system providing scalability and performance|
US10230945B2|2016-10-17|2019-03-12|Accenture Global Solutions Limited|Dynamic loading and deployment of test files to prevent interruption of test execution|
US10284456B2|2016-11-10|2019-05-07|Contec, Llc|Systems and methods for testing electronic devices using master-slave test architectures|
KR20190063277A|2017-11-29|2019-06-07|삼성전자주식회사|The Electronic Device Recognizing the Text in the Image|
Legal status:
2018-05-08| B25A| Requested transfer of rights approved|Owner name: S3 TV TECHNOLOGY LIMITED (IE) |
2018-05-22| B25A| Requested transfer of rights approved|Owner name: ACCENTURE GLOBAL SOLUTIONS LIMITED (IE) |
2018-12-18| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-03-24| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-08-17| B350| Update of information on the portal [chapter 15.35 patent gazette]|
2021-09-08| B07A| Application suspended after technical examination (opinion) [chapter 7.1 patent gazette]|
2021-12-28| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2022-02-01| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 22/12/2011, SUBJECT TO THE LEGAL CONDITIONS. PATENT GRANTED IN ACCORDANCE WITH ADI 5.529/DF, WHICH DETERMINES THE CHANGE OF THE TERM OF GRANT. |
Priority:
Application number | Filing date | Patent title
GB1020101.0|2010-11-26|
GB1020101.0A|GB2485833A|2010-11-26|2010-11-26|Improved OCR Using Configurable Filtering for Analysing Set Top Boxes|
PCT/EP2011/073802|WO2012069662A1|2010-11-26|2011-12-22|Improved ocr for automated stb testing|